
feat(zfs-localpv): add option for choosing between refquota and quota #542

Merged · 7 commits · Sep 21, 2024

Conversation

cinapm (Contributor) commented Jun 13, 2024

Why is this PR required? What issue does it fix?:
Resolves issue #423.

What this PR does?:
This PR adds an option to choose between refquota and quota, and uses the selected value as the ZFS property when applying the volume's size limit.

Checklist:

  • Fixes #
  • PR Title follows the convention of <type>(<scope>): <subject>
  • Has the change log section been updated?
  • Commit has unit tests
  • Commit has integration tests
  • (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
  • (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

codecov-commenter commented Jun 14, 2024

⚠️ Please install the Codecov GitHub app to ensure uploads and comments are reliably processed by Codecov.

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 96.37%. Comparing base (e3d55d7) to head (ae2b7c2).
Report is 2 commits behind head on develop.

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files
@@           Coverage Diff            @@
##           develop     #542   +/-   ##
========================================
  Coverage    96.37%   96.37%           
========================================
  Files            1        1           
  Lines          496      496           
========================================
  Hits           478      478           
  Misses          14       14           
  Partials         4        4           
Flag     | Coverage Δ
bddtests | 96.37% <ø> (ø)

Flags with carried forward coverage won't be shown.


Resolved review threads (outdated) on cmd/main.go and pkg/config/config.go.
@Abhinandan-Purkait (Member)

Thanks @cinapm. Looks good to me. I will let other folks review this. We will target it for the upcoming release.

@dsharma-dc
Have we tested this change by applying a quota, and then a refquota underneath that dataset?
Perhaps add a simple test as well.

@niladrih (Member) left a comment

This configuration option should be toggleable through a StorageClass.

@Abhinandan-Purkait (Member)

@cinapm Can you please let us know your plans on this PR? I would request you to consider the suggestion made by @niladrih here.

cinapm (Contributor, Author) commented Jul 11, 2024

This configuration option should be toggle-able through a storageclass.

I have added the option to set a quotatype parameter on the StorageClass; the default value is quota.
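For illustration, a StorageClass using the parameter might look like the sketch below. The pool name is a placeholder, and the exact parameter name and default follow this discussion rather than any released documentation:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-refquota
provisioner: zfs.csi.openebs.io
parameters:
  poolname: "zfspv-pool"   # placeholder; use your actual ZFS pool
  fstype: "zfs"
  quotatype: "refquota"    # parameter added by this PR; defaults to "quota"
```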

cinapm (Contributor, Author) commented Jul 11, 2024

@cinapm Can you please let us know your plans on this PR? I would request you to consider the suggestion made by @niladrih here.

Instead of using the local-pv config, I have changed the controller to accept and use a quotatype parameter from the StorageClass.

@Abhinandan-Purkait (Member)

@cinapm Thanks, we will review this. The team needs to decide on this, because it changes the CRDs, and any CRD change requires a CRD version bump, which raises the question of upgrades and migration. cc @niladrih @avishnu @tiagolobocastro

@abuisine (Contributor)

I would strongly suggest verifying the impact on the thick provisioning side of things: with thinprovision: "no" the reservation property is set, so it would probably be mandatory to switch to refreservation at the same time refquota is used.

Let me explain the potential problem with the typical usage of snapshots:

  • snapshots are counted against quota, but not against refquota
  • snapshots are counted against reservation, but not against refreservation

You could end up with non-functional thick provisioning when mixing refquota with reservation.

Let me take an example with a workload that requires 10GB of free space, with thick provisioning:

  • a PersistentVolume of 10GB
  • refquota and reservation both set (through thinprovision: "no")
  • with refquota, you can create an unlimited number of GB of snapshots
  • with reservation, snapshots are counted as consuming space
  • this means that with 3GB of space used and 9GB of snapshots, there would be 7GB of free space from a ZFS filesystem standpoint, and the reservation would be exceeded by 2GB
  • so 2GB would be thin provisioned even though the configuration is set to thick provisioning

Using refreservation when setting refquota with thinprovision: "no" would solve the issue.
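The pairing rule described above can be sketched as a small helper. This is a hypothetical illustration of the selection logic, not the PR's actual code (which lives in pkg/zfs/zfs_util.go); the function name is invented:

```go
package main

import "fmt"

// quotaProps returns the ZFS quota property for the chosen quotatype
// and, when thick provisioning is requested (thinprovision: "no"),
// the matching reservation property. Pairing refquota with
// refreservation keeps snapshot space out of both limits, so thick
// provisioning stays consistent.
func quotaProps(quotaType, thinProvision string) (quotaProp, reserveProp string) {
	quotaProp = "quota"
	if quotaType == "refquota" {
		quotaProp = "refquota"
	}
	if thinProvision == "no" {
		if quotaProp == "refquota" {
			reserveProp = "refreservation"
		} else {
			reserveProp = "reservation"
		}
	}
	return quotaProp, reserveProp
}

func main() {
	q, r := quotaProps("refquota", "no")
	fmt.Println(q, r) // refquota refreservation
}
```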

@abuisine (Contributor) commented Jul 15, 2024

Also, I would like to confirm the huge value of this PR: without refquota and refreservation, the usage of snapshots becomes almost impossible, as they are deducted from free space.
From an operational standpoint, it basically means that backups can impact production.
We never used snapshots with this CSI driver for this reason.

This PR (with refreservation) would enable safe usage of snapshots in production.

cinapm (Contributor, Author) commented Jul 15, 2024

> Using refreservation while setting refquota and thinprovision: "no" would solve the issue.

Your point is very valid, and based on the points you mentioned, I have made the changes: now when thinprovision: "no" is set, reservation is applied for quota and refreservation is applied for refquota.

Resolved review thread (outdated) on pkg/zfs/zfs_util.go.
@Abhinandan-Purkait (Member) left a comment

@cinapm A couple of issues here: I see you have changed the v1/zfsvolume.go file and also the CRD and operator YAMLs. The CRDs are supposed to be generated from v1/zfsvolume.go and are not supposed to be changed manually. It would be good if you undo the YAML changes and regenerate them via the scripts. Also, since you have changed v1/zfsvolume.go, we need to run kubegen again:

make bootstrap
make kubegen
make manifests

One more thing: I see you changed the zfs-restore and zfs-snapshot CRDs as well, but I don't see any equivalent changes in their corresponding Go files. If you want to change them too, you need to change the Go files first and then run the scripts above.

cinapm (Contributor, Author) commented Jul 18, 2024

@Abhinandan-Purkait I didn't change them manually. First I changed v1/zfsvolume.go, and then I ran the following commands:

make bootstrap
make manifests

The make manifests command updated the zfs-restore and zfs-snapshot CRDs automatically based on the changes in v1/zfsvolume.go.

Abhinandan-Purkait (Member) commented Jul 19, 2024

> the make manifests command has changed zfs-restore and zfs-snapshot crds automatically based on changes in v1/zfsvolume.go

Okay, I see the embedded volSpec in the restore and snapshot structures. It would help if you rebased your branch; some changes to the deepcopy methods were merged. Also please run make kubegen; it probably won't change anything in your case, but it's worth checking.

You also need to copy the crd changes into the helm chart crd templates, as that's how crds get installed. Check https://github.com/openebs/zfs-localpv/tree/develop/deploy/helm/charts/charts/crds/templates

The other changes look good to me. It might be worth testing an upgrade once before we take this in: for example, install the previous version of zfs-localpv, upgrade it to this version, and check the CRD changes. Thanks

cinapm (Contributor, Author) commented Jul 19, 2024

> It would help if you rebased your branch [...] You also need to copy the crd changes into the helm chart crd templates, as that's how crds get installed.

@Abhinandan-Purkait As you suggested, I first rebased my branch, and then I copied the CRD changes into the corresponding Helm manifests.

@Abhinandan-Purkait (Member) left a comment

LGTM. Test the upgrades before this can be taken in.

@sinhaashish (Member)

@cinapm It would be good if you add a test case to check refquota and refreservation.
Something along these lines is already present here:

func generateThinProvisionParams(property map[string]string) {

You can find the details about this in issue #560.
The conclusion is here: #560 (comment)
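A test along the suggested lines might look like the sketch below. It is self-contained for illustration: setQuotaProperty is a hypothetical stand-in for the real property-building code in pkg/zfs, and the table-driven style loosely mirrors the generateThinProvisionParams helper mentioned above:

```go
package main

import "fmt"

// setQuotaProperty is a hypothetical stand-in for the real code in
// pkg/zfs; it builds the zfs property assignment for a volume's quota.
func setQuotaProperty(quotaType, size string) string {
	if quotaType == "refquota" {
		return "refquota=" + size
	}
	return "quota=" + size
}

// checkQuotaParams is a table-driven check over the supported quota
// types, in the style of the existing thin-provision parameter test.
func checkQuotaParams() error {
	cases := map[string]string{
		"quota":    "quota=10G",
		"refquota": "refquota=10G",
	}
	for qt, want := range cases {
		if got := setQuotaProperty(qt, "10G"); got != want {
			return fmt.Errorf("quotatype %s: got %s, want %s", qt, got, want)
		}
	}
	return nil
}

func main() {
	if err := checkQuotaParams(); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}
```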

@Abhinandan-Purkait (Member)

@cinapm Please rebase and run make manifests; a lot of changes went in on the testing and CI side of things. Thanks

@Abhinandan-Purkait (Member)

@cinapm I see there isn't any activity here for a while, is there any blocker that you are facing that you need help with?

cinapm (Contributor, Author) commented Sep 6, 2024

@cinapm I see there isn't any activity here for a while, is there any blocker that you are facing that you need help with?

I have rebased onto the develop branch and also run make manifests.

cinapm (Contributor, Author) commented Sep 6, 2024

@cinapm Please rebase and run make manifests a lot of changes went in on the testing and ci side of things. Thanks

@Abhinandan-Purkait
I have resolved them

@Abhinandan-Purkait (Member) left a comment

LGTM


@sinhaashish (Member)

I am approving it, but it would be better to add the test here.

@sinhaashish (Member)

@cinapm Just making sure there’s nothing pending on your side. If everything looks good, we can go ahead and merge it

cinapm (Contributor, Author) commented Sep 18, 2024

@cinapm Just making sure there’s nothing pending on your side. If everything looks good, we can go ahead and merge it

@sinhaashish There is nothing pending on my side; everything is OK.

@Abhinandan-Purkait (Member)

@cinapm Thanks. Merging the PR.

@Abhinandan-Purkait Abhinandan-Purkait merged commit f185333 into openebs:develop Sep 21, 2024
8 checks passed